
    Identification of claudin-4 as a marker highly overexpressed in both primary and metastatic prostate cancer

    In the quest for markers of expression and progression for prostate cancer (PCa), the majority of studies have focussed on molecular data exclusively from primary tumours. Although expression in metastases is inferred, a lack of correlation with secondary tumours potentially limits their diagnostic and therapeutic applicability. Molecular targets were identified by examining expression profiles of prostate cell lines using cDNA microarrays. The genes identified were verified on PCa cell lines and tumour samples from both primary and secondary tumours using real-time RT–PCR, western blotting and immunohistochemistry. Claudin-4, coding for an integral membrane cell-junction protein, was the most significantly (P<0.00001) upregulated marker in both primary and metastatic tumour specimens compared with benign prostatic hyperplasia, at both RNA and protein levels. In primary tumours, claudin-4 was more highly expressed in lower grade (Gleason 6) lesions than in higher grade (Gleason ⩾7) cancers. Expression was prominent throughout metastases from a variety of secondary sites in fresh-frozen and formalin-fixed specimens from both androgen-intact and androgen-suppressed patients. As a result of its prominent expression in both primary and secondary PCas, together with its established role as a receptor for Clostridium perfringens enterotoxin, claudin-4 may be useful as a potential marker and therapeutic target for PCa metastases.

    Automatic Boolean query formulation for systematic review literature search

    Formulating Boolean queries for systematic review literature search is a challenging task. Commonly, queries are formulated by information specialists using the protocol specified in the review and interactions with the research team. Information specialists have in-depth experience in formulating queries in this domain, but may not have in-depth knowledge of the reviews' topics. Query formulation requires a significant amount of time and effort, and is performed interactively: specialists repeatedly formulate queries, attempt to validate their results, and reformulate specific Boolean clauses. In this paper, we investigate the possibility of automatically formulating a Boolean query from the systematic review protocol. We propose a novel five-step approach to automatic query formulation, specific to Boolean queries in this domain, which approximates the process by which information specialists formulate queries. In this process, we use syntax parsing to derive the logical structure of high-level concepts in a query, automatically extract and map concepts to entities in order to perform entity expansion, and finally apply post-processing operations (such as stemming and search filters). Automatic query formulation for systematic review literature search has several benefits: (i) it can provide reviewers with an indication of the types of studies that will be retrieved, without the involvement of an information specialist; (ii) it can provide information specialists with an initial query to begin the formulation process; and (iii) it can provide researchers who perform rapid reviews with a method to quickly perform searches.
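    To make the later steps of this pipeline concrete, below is a minimal Python sketch of entity expansion, Boolean assembly, and post-processing. The ENTITY_SYNONYMS table, the post_process helper, and the example filter clause are illustrative assumptions, not the paper's implementation, which derives the high-level concepts via syntax parsing over the review protocol and maps them to entities in a knowledge base.

    ```python
    import re

    # Illustrative synonym table standing in for the paper's concept-to-entity
    # mapping step; a real system would link concepts to a knowledge base.
    ENTITY_SYNONYMS = {
        "diabetes": ["diabetes mellitus", "diabetic"],
        "exercise": ["physical activity", "training"],
    }

    def post_process(term: str) -> str:
        """Toy stand-in for stemming: truncate common suffixes to a wildcard;
        multi-word terms become quoted phrases."""
        if " " in term:
            return f'"{term}"'
        return re.sub(r"(ing|es|s)$", "", term) + "*"

    def expand_concept(concept: str) -> list[str]:
        """Entity expansion: the concept itself plus any mapped synonyms."""
        return [concept] + ENTITY_SYNONYMS.get(concept, [])

    def formulate_query(concepts: list[str], search_filter: str = "") -> str:
        """Assemble the Boolean query: OR within a concept's expansion,
        AND across concepts, with an optional study-design filter clause."""
        clauses = [
            "(" + " OR ".join(post_process(t) for t in expand_concept(c)) + ")"
            for c in concepts
        ]
        query = " AND ".join(clauses)
        return f"{query} AND {search_filter}" if search_filter else query

    # High-level concepts as they might emerge from parsing a review protocol.
    print(formulate_query(["diabetes", "exercise"],
                          search_filter='"randomized controlled trial"'))
    # -> (diabet* OR "diabetes mellitus" OR diabetic*) AND
    #    (exercise* OR "physical activity" OR train*) AND "randomized controlled trial"
    ```

    The OR-within-concept, AND-across-concepts structure mirrors the standard block-building convention that information specialists use, which is why the approach approximates their interactive process.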

    Overview of the CLEF eHealth Evaluation Lab 2019

    In this paper, we provide an overview of the seventh annual edition of the CLEF eHealth evaluation lab. CLEF eHealth 2019 continues our evaluation resource building efforts around easing and supporting patients, their next of kin, clinical staff, and health scientists in understanding, accessing, and authoring electronic health information in a multilingual setting. This year’s lab advertised three tasks: Task 1 on indexing non-technical summaries of German animal experiments with International Classification of Diseases, Version 10 codes; Task 2 on technology assisted reviews in empirical medicine, building on the 2017 and 2018 tasks in English; and Task 3 on consumer health search in mono- and multilingual settings, building on the 2013–18 Information Retrieval tasks. In total, nine teams took part in these tasks (six in Task 1 and three in Task 2). Herein, we describe the resources created for these tasks and the evaluation methodology adopted. We also provide a brief summary of the participants in this year’s challenges and the results obtained. As in previous years, the organizers have made the data and tools associated with the lab tasks available for future research and development.